2 Amazon Simple Storage Service (AWS S3)

The plugin adds an s3upload method to the java.io.File class; you can call this method passing a simple closure that configures the upload request.

2.1 General plugin config

You can configure the plugin in Config.groovy or per upload. Setting options in Config.groovy lets you define them once for the entire application, and you can override them on a specific file upload.

AWS Credentials

You can set your access key and secret key with the config keys below:

grails.plugin.aws.s3.config.accessKey = "your-access-key-here"
grails.plugin.aws.s3.config.secretKey = "your-secret-key-here"

Bucket name

To set a default bucket that will be used for all file uploads, use the config below:

grails.plugin.aws.s3.config.bucket    = "grails-plugin-test"

The bucket will be created during the first upload if it does not already exist.

ACL (file permission)

This setting controls the permissions that will be granted on uploaded files.

To configure public access as the default for all file uploads, use this:

grails.plugin.aws.s3.config.acl = "public"

If you would like private access to your files, configure it this way:

grails.plugin.aws.s3.config.acl = "private"

RRS - Reduced Redundancy Storage

Files stored with RRS get cheaper storage with 99.99% durability instead of the 99.999999999% provided by standard AWS S3 storage. More information here: http://aws.amazon.com/about-aws/whats-new/2010/05/19/announcing-amazon-s3-reduced-redundancy-storage/

RRS is disabled by default; if you would like to enable it for all uploads, use this config key:

grails.plugin.aws.s3.config.rrs = true
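
Putting the options above together, a complete set of defaults in Config.groovy might look like this (all values are placeholders):

```groovy
// Config.groovy - plugin-wide defaults (all values are placeholders)
grails.plugin.aws.s3.config.accessKey = "your-access-key-here"
grails.plugin.aws.s3.config.secretKey = "your-secret-key-here"
grails.plugin.aws.s3.config.bucket    = "grails-plugin-test"
grails.plugin.aws.s3.config.acl       = "public"
grails.plugin.aws.s3.config.rrs       = false
```

Any of these can still be overridden on an individual upload, as shown in the next section.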

2.2 Uploading files

As described at the beginning, the plugin adds an s3upload(...) method to the File class. You just call this method passing a closure with the config options you want to override; any option you leave out will be taken from the Config.groovy defaults.

This plugin uses jets3t (http://jets3t.s3.amazonaws.com/index.html) to handle the file upload, and detects the file's content type using jmimemagic (http://sourceforge.net/projects/jmimemagic/).

Check the examples below:

Simple File upload

def uploadedFile = new File("/tmp/test.txt").s3upload {
	path "folder/to/my/file/"
}

This way your test.txt file will be uploaded to

<default-bucket>.s3.amazonaws.com/folder/to/my/file/test.txt

If you want to override config options during a file upload, check the next sections of this guide.

2.2.1 Setting the file virtual path

S3 does not support folders or buckets inside other buckets. To work around this and keep your files organized, you can use the path method inside the config closure. The plugin will then store metadata with the file telling AWS that it is virtually inside a folder that does not really exist.

The effect is exactly like in a regular folder. For example, doing the upload below:

def uploadedFile = new File("/tmp/profile-picture.jpg").s3upload {
	bucket "my-aws-app"
	path "pictures/user/profile/"
}

The file will be stored and available at the following URL:

http://my-aws-app.s3.amazonaws.com/pictures/user/profile/profile-picture.jpg

In the AWS S3 console the files will also visually appear inside folders, and some third-party apps already use this feature to show "folders".

2.2.2 Overwriting AWS credentials

Just call the credentials method inside the upload closure, and these credentials will be used (for this upload only). Example:

def uploadedFile = new File("/tmp/test.txt").s3upload {
	credentials "my-other-access-key", "my-other-secret-key"
}

2.2.3 Overwriting bucket to file upload

You can call the bucket method to define which bucket (different from the default) will be used. This bucket will be created if it does not exist.

def uploadedFile = new File("/tmp/test.txt").s3upload {
	bucket "other-bucket"
}

This file will be uploaded to

other-bucket.s3.amazonaws.com/test.txt

Remember: when the plugin creates a bucket that does not already exist, it is created in the default US region. If you would like a different location, just pass a second string parameter containing the region. For example, to create this bucket in the Europe region:

def uploadedFile = new File("/tmp/test.txt").s3upload {
	bucket "bucket-not-yet-created-in-europe", "EU"
}

2.2.4 ACL (file permission)

To set the permissions that will be granted on a single file, use the same values shown in the "General Plugin Config" topic of this guide:

def uploadedFile = new File("/tmp/test.txt").s3upload {
	acl "private"
}

2.2.5 RRS - Reduced Redundancy Storage

If you would like a different RRS setting for a specific file, call the rrs method in the closure, passing true or false as you wish:

def uploadedFile = new File("/tmp/test.txt").s3upload {
	rrs false
}
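
The per-upload options from the sections above can be combined freely in a single closure. A sketch (the bucket and path names here are just illustrative):

```groovy
def uploadedFile = new File("/tmp/report.pdf").s3upload {
	bucket "my-aws-app"   // override the default bucket
	path "reports/"       // virtual folder for this file
	acl "private"         // restrict access to this file only
	rrs true              // enable Reduced Redundancy Storage
}
```

Any option you leave out still falls back to the Config.groovy defaults.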

2.2.6 Setting File Metadata

AWS S3 objects can store user metadata; adding it is as simple as passing a metadata map to the file upload:

def uploadedFile = new File("/tmp/test.txt").s3upload {
	metadata('user-id': 123, username: 'johndoe', 'registered-date': new Date().format('dd/MM/yyyy'))
}

2.4 Using Encrypted AWS Credentials

You can use encrypted AWS credentials with this plugin. Doing this, your access/secret keys won't be stored inside your app.

To use this, define your encrypted credentials in the resources.xml or resources.groovy file and later pass them to the credentials method inside the upload closure.

resources.xml

<bean id="s3Credential" class="org.jets3t.service.security.AWSCredentials" factory-method="load">
    <constructor-arg value="password" />
    <constructor-arg value="./grails-app/conf/s3credentials.encrypted" />
</bean>
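
If you prefer resources.groovy, an equivalent bean definition might look like the sketch below (assuming Grails' BeanBuilder syntax; the password and file path mirror the XML above):

```groovy
// grails-app/conf/spring/resources.groovy
beans = {
	// constructor args become factory-method args because factoryMethod is set
	s3Credential(org.jets3t.service.security.AWSCredentials,
	             'password',
	             './grails-app/conf/s3credentials.encrypted') { bean ->
		bean.factoryMethod = 'load'  // use AWSCredentials.load(...) instead of a constructor
	}
}
```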

Later, you'll inject it into your controller

def s3Credential

and in your file upload

def uploadedFile = new File("/tmp/test.txt").s3upload {
	credentials s3Credential
}

This way, the upload will connect to AWS using the encrypted credentials stored in "./grails-app/conf/s3credentials.encrypted".